KMID : 1038320230200010017
Journal of Educational Evaluation for Health Professions 2023 Volume 20, No. 1, p.17 ~ p.17
Comparing ChatGPT's ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
Abstract
Learning about one's implicit bias is crucial for improving one's cultural competency and thereby reducing health inequity. To evaluate bias among medical students following a previously developed cultural training program targeting New Zealand Māori, we developed a text-based self-evaluation tool called the Similarity Rating Test (SRT). The development process of the SRT was resource-intensive, limiting its generalizability and applicability. Here, we explored the potential of ChatGPT, an automated chatbot, to assist in developing the SRT by comparing ChatGPT's and students' evaluations of the SRT. Although the results showed neither significant equivalence nor a significant difference between ChatGPT's and students' ratings, ChatGPT's ratings were more consistent than the students'. The consistency rate was higher for non-stereotypical than for stereotypical statements, regardless of rater type. Further studies are warranted to validate ChatGPT's potential for assisting in SRT development for implementation in medical education and the evaluation of ethnic stereotypes and related topics.
KEYWORDS
Artificial intelligence, Cultural competency, Implicit bias, Medical education, New Zealand